Digital Learning Program Development

Program Evaluation


Evaluation is the process of gathering and analyzing data to inform decision-making. Evaluation is different from assessment, which is the process of determining performance against a known standard. Evaluation is also different from research, which is the process of investigating phenomena to derive conclusions or develop theory. Evaluation does, however, use data from assessment and research to support decision-making.

Evaluation typically comes in two flavors:

  • Formative evaluation is ongoing; it informs day-to-day operations and decision-making and typically answers the question “What should we be doing next?”
  • Summative evaluation tends to occur at the “end” of something and typically leads to decisions about adoption or rejection, answering the questions “What have we done, and how did it go?”

A high-quality program will include both formative and summative evaluations. Frequent and thorough formative evaluations allow designers to course-correct before a program goes too far off the rails. This is one of the key differentiators between evaluation and research. In a research context, you collect empirical data on an intervention without making judgments about its quality (“this is what happened when we did X”). Evaluation, by contrast, collects data for the purpose of identifying quality interventions (“we did X, Y happened, Y is good”). Researchers also observe without interference, because any changes to the study affect the data. Evaluators are active participants in the process, using the data collected to drive improvements along the way, so the final program is informed by issues that arise in the middle.

Evaluation Models

There are several common evaluation models that can suggest strategies for beginning an evaluation program.

Stufflebeam’s CIPP model poses four questions that an evaluator should ask:

  • What should/did we do?
  • How should/did we do it?
  • Are we doing it/Did we do it as planned?
  • Did the program work?

Kirkpatrick’s four-level model is useful in evaluating professional learning programs, and can be used to evaluate training across four domains: Reaction (did people feel the training was valuable?), Learning (what was learned or not learned?), Behavior (did they change their practice after the training?), and Results (how did the training impact outcomes?).
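
To make the four levels concrete in an evaluation plan, they can be captured as a simple checklist structure. The Python sketch below pairs each level with an example question and some hypothetical data sources; the sources are assumptions added for illustration, not prescribed by the model itself.

```python
# Kirkpatrick's four levels as a simple data structure.
# The example data sources are illustrative assumptions only.
KIRKPATRICK_LEVELS = {
    "Reaction": {
        "question": "Did participants feel the training was valuable?",
        "example_sources": ["post-session survey", "facilitator feedback"],
    },
    "Learning": {
        "question": "What was learned (or not learned)?",
        "example_sources": ["pre/post assessment", "skills demonstration"],
    },
    "Behavior": {
        "question": "Did participants change their practice afterward?",
        "example_sources": ["classroom observation", "usage analytics"],
    },
    "Results": {
        "question": "How did the training impact outcomes?",
        "example_sources": ["student achievement data", "program metrics"],
    },
}

for level, details in KIRKPATRICK_LEVELS.items():
    print(f"{level}: {details['question']}")
```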

Data Collection

Data collection for evaluation is similar to data collection for visioning, typically involving a series of focus groups, surveys, and rubrics. Comparing evaluation data against data from the original visioning process can be useful in determining whether anything has changed. Evaluators may also look at other data, such as learning analytics, usage records, test scores, and teacher observation scores, with the goal of triangulating qualitative and quantitative data.
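
As a minimal sketch of what that triangulation might look like in practice, the example below compares hypothetical survey scores from the visioning phase with current scores and pairs the result with an assumed usage metric; all values, field names, and thresholds are made up for illustration.

```python
from statistics import mean

# Hypothetical survey responses (1-5 scale) from the visioning phase and now,
# plus an assumed quantitative usage metric from learning analytics.
visioning_scores = [2, 3, 2, 3, 3, 2]
current_scores = [4, 3, 4, 5, 4, 4]
weekly_logins_per_student = 3.2  # assumed usage figure

survey_change = mean(current_scores) - mean(visioning_scores)

# Triangulate: survey movement plus quantitative usage evidence.
print(f"Average survey score change since visioning: {survey_change:+.2f}")
print(f"Weekly logins per student: {weekly_logins_per_student}")
if survey_change > 0 and weekly_logins_per_student >= 2:
    print("Both data sources point the same way: adoption is improving.")
else:
    print("Data sources disagree or show no growth; dig deeper before concluding.")
```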

Evaluation, in general, is a partnership between the evaluator and the implementer. If you’re evaluating someone else’s project, it’s important that they see you as an advocate for the success of their program and know that you’re there to support them.

What To Evaluate

When constructing an evaluation plan, evaluate all of the SMART goals in your technology plan to determine whether they were met, along with each individual strategy and outcome in the logic model. It’s also important to evaluate awareness of issues: going into a digital-age learning program, staff may think “they’ve got this,” but then realize they didn’t know what they didn’t know as they progress. Capturing this data is useful.
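
One way to keep track of this is a simple evaluation-plan structure that maps each SMART goal or logic-model strategy to the evidence that will be collected and how often it is reviewed. The sketch below uses hypothetical goals and measures purely for illustration; substitute the goals from your own technology plan.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationItem:
    """One row of an evaluation plan: what is being evaluated and how."""
    goal: str                                      # SMART goal or logic-model strategy/outcome
    measures: list = field(default_factory=list)   # evidence to collect
    cadence: str = "quarterly"                     # how often it is reviewed

# Hypothetical examples only.
plan = [
    EvaluationItem(
        goal="80% of teachers use the LMS for formative feedback by June",
        measures=["LMS usage reports", "teacher survey", "walkthrough rubric"],
    ),
    EvaluationItem(
        goal="Staff report increased confidence with digital-age pedagogy",
        measures=["pre/post self-assessment", "focus group notes"],
        cadence="each semester",
    ),
]

for item in plan:
    print(f"{item.goal} -> {', '.join(item.measures)} ({item.cadence})")
```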

Evaluation Results

As stated earlier, evaluation is ongoing in nature. The answer to “how did it go?” should inform “what should we do next?” Rapid-cycle evaluations (evaluations that happen frequently) help you revise your technology strategic plan regularly to account for changes in vision, technology, time, and budget, as well as to shed failing initiatives and bring on new, promising ones. Ongoing evaluation also gives you an “early warning system,” so that at any given time you know where you are relative to your goal, the trajectory you’re on, and whether any changes are needed. Evaluation sometimes looks like a report, but it may also be a dashboard of key metrics or an ongoing focus group process.
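
As a rough illustration of the “early warning system” idea, the sketch below projects a metric’s current trajectory forward and flags whether it is on pace to meet its goal; the metric, target, and dates are assumed values, and the linear projection (from an assumed baseline of zero) is a deliberate simplification.

```python
from datetime import date

# Assumed values for illustration: a goal, where we are now, and the timeline.
goal_value = 0.80            # e.g., 80% of classrooms using the platform weekly
current_value = 0.55
start = date(2024, 9, 1)
today = date(2025, 1, 15)
deadline = date(2025, 6, 15)

# Project the current rate of change out to the deadline
# (simple linear projection assuming a baseline of zero at the start date).
elapsed = (today - start).days
remaining = (deadline - today).days
rate_per_day = current_value / elapsed if elapsed else 0.0
projected = current_value + rate_per_day * remaining

print(f"Current: {current_value:.0%}, projected at deadline: {projected:.0%}")
if projected < goal_value:
    print("Early warning: off pace for the goal; consider adjusting the plan.")
else:
    print("On track; keep monitoring.")
```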

Disseminating Results

There is no set format for evaluation results; whatever form they take, they should be actionable for their audience. This may mean producing different products for different audiences: reports for superintendents and boards, presentations for teachers, and video stories for parents and students.

Increasingly, dissemination takes the form of data visualizations; many vendors and providers, such as the Ed-Fi platform, offer examples.
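
As a rough sketch of the kind of chart a district might share, the example below uses matplotlib with made-up monthly adoption figures; it is not tied to the Ed-Fi platform or to any particular data source.

```python
import matplotlib.pyplot as plt

# Made-up monthly adoption data for illustration only.
months = ["Sep", "Oct", "Nov", "Dec", "Jan", "Feb"]
pct_classrooms_active = [22, 31, 40, 44, 52, 58]
goal = 80

plt.plot(months, pct_classrooms_active, marker="o", label="Classrooms active weekly (%)")
plt.axhline(goal, linestyle="--", color="gray", label=f"Goal ({goal}%)")
plt.ylabel("Percent of classrooms")
plt.title("Digital learning platform adoption")
plt.legend()
plt.tight_layout()
plt.show()
```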